Sequential Prediction of Individual Sequences Under General Loss Functions

Authors

  • David Haussler
  • Jyrki Kivinen
  • Manfred K. Warmuth
Abstract

We consider adaptive sequential prediction of arbitrary binary sequences when the performance is evaluated using a general loss function. The goal is to predict on each individual sequence nearly as well as the best prediction strategy in a given comparison class of (possibly adaptive) prediction strategies, called experts. By using a general loss function, we generalize previous work on universal prediction, forecasting, and data compression. However, here we restrict ourselves to the case when the comparison class is finite. For a given sequence, we define the regret as the total loss on the entire sequence suffered by the adaptive sequential predictor, minus the total loss suffered by the predictor in the comparison class that performs best on that particular sequence. We show that for a large class of loss functions, the minimax regret is either Θ(log N) or Θ(√(ℓ log N)), depending on the loss function, where N is the number of predictors in the comparison class and ℓ is the length of the sequence to be predicted. The former case was shown previously by Vovk; we give a simplified analysis with an explicit closed form for the constant in the minimax regret formula, and give a probabilistic argument that shows this constant is the best possible. Some weak regularity conditions are imposed on the loss function in obtaining these results. We also extend our analysis to the case of predicting arbitrary sequences that take real values in the interval [0, 1].
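The setting described above can be illustrated with the exponentially weighted average forecaster, a standard algorithm for prediction with expert advice that achieves the Θ(√(ℓ log N)) regret rate for losses such as absolute loss. This is a sketch of the general setting, not the paper's exact minimax strategy; the function name and the choice of absolute loss are illustrative assumptions.

```python
import math

def ewa_predict(outcomes, expert_preds, eta):
    """Exponentially weighted average forecaster under absolute loss.

    outcomes:     list of outcomes in [0, 1]
    expert_preds: expert_preds[i][t] is expert i's prediction at round t
    eta:          learning rate (> 0)
    Returns (learner_loss, best_expert_loss).
    """
    n = len(expert_preds)
    weights = [1.0] * n
    learner_loss = 0.0
    expert_loss = [0.0] * n
    for t, y in enumerate(outcomes):
        total = sum(weights)
        # Predict with the weighted average of the experts' predictions.
        p = sum(weights[i] * expert_preds[i][t] for i in range(n)) / total
        learner_loss += abs(p - y)
        # Penalize each expert exponentially in its incurred loss.
        for i in range(n):
            loss = abs(expert_preds[i][t] - y)
            expert_loss[i] += loss
            weights[i] *= math.exp(-eta * loss)
    return learner_loss, min(expert_loss)
```

With the learning rate set to η = √(8 ln N / ℓ), the regret of this forecaster is at most √((ℓ/2) ln N) for any bounded convex loss, matching the √(ℓ log N) rate in the abstract up to the constant.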


Related articles

Exchangeability Characterizes Optimality of Sequential Normalized Maximum Likelihood and Bayesian Prediction with Jeffreys Prior

We study online prediction of individual sequences under logarithmic loss with parametric constant experts. The optimal strategy, normalized maximum likelihood (NML), is computationally demanding and requires the length of the game to be known. We consider two simpler strategies: sequential normalized maximum likelihood (SNML), which computes the NML forecasts at each round as if it were the la...


Bayesian estimation and prediction with multiply Type-II censored samples of sequential order statistics from one- and two-parameter exponential distributions

This article introduces sequential order statistics. Based on a multiply Type-II censored sample of sequential order statistics, Bayesian estimators are derived for the parameters of one- and two-parameter exponential distributions, under the assumption that the prior distribution is an inverse gamma distribution; the Bayes estimator with respect to squared error loss ...


Universal Data Compression and Linear Prediction

The relationship between prediction and data compression can be extended to universal prediction schemes and universal data compression. Recent work shows that minimizing the sequential squared prediction error for individual sequences can be achieved using the same strategies which minimize the sequential codelength for data compression of individual sequences. Defining a "probability" as an ex...


On optimal sequential prediction for general processes

This paper considers several aspects of the sequential prediction problem for unbounded, non-stationary processes under p-th power loss ℓ_p(u, v) = |u − v|^p, 1 < p < ∞. In the first part of the paper it is shown that Bayes prediction schemes are Cesàro optimal under general conditions, that Cesàro optimal prediction schemes are unique in a natural sense, and that Cesàro optimality is equivalent to ...


Twice Universal Linear Prediction of Individual Sequences

We present a linear prediction algorithm which is "twice universal," over parameters and model orders, for individual sequences under the square-error loss function. The sequentially accumulated mean-square prediction error is as good as any linear predictor of order up to some M. Following an approach taken in many prediction problems we transform the linear prediction problem into a sequentia...




Journal:
  • IEEE Trans. Information Theory

Volume 44, Issue –

Pages –

Publication year: 1998